AI Is as Risky as Pandemics and Nuclear War, Top CEOs Say, Urging Global Cooperation
The CEOs of the world's leading artificial intelligence companies, along with hundreds of other AI scientists and experts, made their most unified statement yet about the existential risks the technology poses to humanity, in a short open letter released Tuesday. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," says the letter, released by the California-based non-profit the Center for AI Safety, in its entirety. The CEOs of what are widely seen as the three most cutting-edge AI labs (Sam Altman of OpenAI, Demis Hassabis of DeepMind, and Dario Amodei of Anthropic) are all signatories. So is Geoffrey Hinton, widely acknowledged as the "godfather of AI," who made headlines last month when he stepped down from his position at Google and warned of the risks AI posed to humanity.
How to rein in the AI threat? Let the lawyers loose
Log Off Movement CEO Emma Lembke and teacher Matt Miles discuss the impact of artificial intelligence on kids on 'The Story.' Fifty-five percent of Americans are worried about the threat AI poses to the future of humanity, according to a recent Monmouth University poll. More than 1,000 AI experts and funders, including Elon Musk and Steve Wozniak, signed a letter calling for a six-month pause in training new AI models. Time, in turn, published an article calling for a permanent global ban. The problem with these proposals, however, is that they require coordination among numerous stakeholders across a wide variety of companies and governments. Let me share a more modest proposal that is much more in line with our existing methods of reining in potentially threatening developments: legal liability.
AI: Creative destruction does not only destroy. So don't kill the Chatbots!
The Financial Times recently found that ChatGPT (along with Bard, Google's own experimental chatbot) can tell a joke at least passably well, write an advertising slogan, make stock picks, and imagine a conversation between Xi Jinping and Vladimir Putin. It is understandable that a new technology with such seemingly vast powers would raise concerns. But much of the distress is misplaced. AI's detractors tend to understate the pace of technological change that advanced economies have already been living through. In 1970, US employment was roughly evenly divided across occupations, with low-skill, medium-skill and high-skill jobs accounting for, respectively, 31 per cent, 38 per cent, and 30 per cent of total hours worked.
The Drum
The letter, titled "Pause giant AI experiments" and published on March 22 by the Future of Life Institute, a nonprofit organization concerned with mitigating existential risks facing humanity – and especially such risks associated with AI – makes the case that AI research is outpacing our ability to implement protective guardrails. Leading AI companies have become engaged in a dangerous arms race, the authors of the open letter argue, inexorably leading the world to "ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control." The letter urges caution, as opposed to a careless march into a potentially dangerous future: "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the authors write. The letter also entreats "all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4." As of Monday morning, the letter has received more than 3,100 signatures.
Pausing AI development would 'simply benefit China,' warns former Google CEO Eric Schmidt
Eric Schmidt says the six-month moratorium on AI development supported by Elon Musk, Steve Wozniak, and other tech leaders would "simply benefit China" and called instead for tighter regulation. The former Google CEO told the Australian Financial Review he was worried about the rapid development of AI and that "concerns could be understated." "I think ... things could be worse than people are saying," he said, noting that as large language models get bigger they have "emergent behaviour we don't understand." The use of generative AI has exploded in recent months with the launch of OpenAI's ChatGPT, Google's Bard, and Microsoft's AI-powered Bing, as well as image-generating platforms such as DALL-E and Midjourney. People have been using generative AI in both their personal and professional lives to write essays, think up recipes, summarize emails, publish articles, and craft résumés and cover letters.
Bill Gates says pausing the development of AI systems will not 'solve' challenges ahead, days after Musk and others cautioned about 'risks to society' from the tech
Two of the biggest names in tech seem to be disagreeing over artificial intelligence, or AI. While Elon Musk called for a six-month pause on the advanced development of AI and to take a step back from a "dangerous race," Bill Gates doesn't think this is the way to go. "I don't think asking one particular group to pause solves the challenges," the Microsoft co-founder told Reuters in an interview Monday. He made the comment a week after Elon Musk and 1,125 others, including AI experts, signed an open letter calling for a six-month pause on advanced development of the tech. The letter, issued by the non-profit Future of Life Institute, has garnered about 9,400 signatures so far.
Tech CEO warns AI risks 'human extinction' as experts rally behind six-month pause
Fox News correspondent Matt Finn has the latest on the impact of AI technology that some say could outpace humans on 'Special Report.' One of the tech CEOs who signed a letter calling for a six-month pause on AI labs training powerful systems warned that such technology threatens "human extinction." "As stated by many, including these models' developers, the risk is human extinction," Connor Leahy, CEO of Conjecture, a company that describes itself as working to make "AI systems boundable, predictable and safe," told Fox News Digital this week. Leahy is one of more than 2,000 experts and tech leaders who signed a letter this week calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." The letter is backed by Tesla and Twitter CEO Elon Musk, as well as Apple co-founder Steve Wozniak, and argues that "AI systems with human-competitive intelligence can pose profound risks to society and humanity."
Tech leaders and AI experts demand a six-month pause on 'out-of-control' AI experiments
An open letter signed by tech leaders and prominent AI researchers has called for AI labs and companies to "immediately pause" their work. Signatories like Steve Wozniak and Elon Musk agree the risks warrant a minimum six-month break from producing technology beyond GPT-4, time to enjoy existing AI systems, allow people to adjust, and ensure the technology benefits everyone. The letter adds that care and forethought are necessary to ensure the safety of AI systems, but are being ignored. The reference to GPT-4, a model by OpenAI that can respond with text to written or visual messages, comes as companies race to build complex chat systems that utilize the technology. Microsoft, for example, recently confirmed that its revamped Bing search engine has been powered by the GPT-4 model for over seven weeks, while Google recently debuted Bard, its own generative AI system powered by LaMDA.